
release-21.2: flowinfra: make max_running_flows default depend on the number of CPUs #75509

Merged 1 commit on Jan 25, 2022

Conversation


@yuzefovich yuzefovich commented Jan 25, 2022

Backport 1/1 commits from #71787.

/cc @cockroachdb/release


We think that it makes sense to scale the default value for
`max_running_flows` based on how beefy the machines are, so we make it a
multiple of the number of available CPU cores. We do so in
a backwards-compatible fashion by treating positive values of
`sql.distsql.max_running_flows` as absolute values (the previous
meaning) and negative values as multiples of the number of CPUs.

The choice of 128 as the default multiple is driven by the old default
value of 500: with 4 CPUs we get a limit of 512, pretty close to the old
default.
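The interpretation described above can be sketched in Go as follows (a hypothetical `maxRunningFlows` helper written for illustration; the actual CockroachDB code differs):

```go
package main

import (
	"fmt"
	"runtime"
)

// maxRunningFlows interprets the cluster setting value: positive values
// are absolute limits (the previous meaning), while negative values are
// treated as per-CPU multiples and scaled by the number of CPUs on the
// node. This is a sketch of the logic described above, not the actual
// CockroachDB implementation.
func maxRunningFlows(settingValue int64, numCPUs int) int64 {
	if settingValue >= 0 {
		return settingValue
	}
	return -settingValue * int64(numCPUs)
}

func main() {
	fmt.Println(maxRunningFlows(-128, 4)) // default of -128 on a 4 CPU machine: 512
	fmt.Println(maxRunningFlows(-128, 8)) // same default on an 8 CPU machine: 1024
	// Positive values keep the old absolute meaning regardless of CPU count.
	fmt.Println(maxRunningFlows(500, runtime.NumCPU())) // 500
}
```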

Informs: #34229.

Release note (ops change): The meaning of the
`sql.distsql.max_running_flows` cluster setting has been extended so
that when the value is negative, it is multiplied by the number of
CPUs on the node to get the maximum number of concurrent remote flows on
the node. The default value is -128, meaning that on a 4 CPU machine we
will have up to 512 concurrent remote DistSQL flows, and on an 8 CPU
machine up to 1024. The previous default was 500.

Release justification: low-risk change to reduce the number of "no inbound
stream connection" errors.

@yuzefovich yuzefovich requested review from RaduBerinde, michae2 and a team January 25, 2022 19:27

blathers-crl bot commented Jan 25, 2022

Thanks for opening a backport.

Please check the backport criteria before merging:

  • Patches should only be created for serious issues or test-only changes.
  • Patches should not break backwards-compatibility.
  • Patches should change as little code as possible.
  • Patches should not change on-disk formats or node communication protocols.
  • Patches should not add new functionality.
  • Patches must not add, edit, or otherwise modify cluster versions; or add version gates.
If some of the basic criteria cannot be satisfied, ensure that the exceptional criteria below are satisfied.
  • There is a high priority need for the functionality that cannot wait until the next release and is difficult to address in another way.
  • The new functionality is additive-only and only runs for clusters which have specifically “opted in” to it (e.g. by a cluster setting).
  • New code is protected by a conditional check that is trivial to verify and ensures that it only runs for opt-in clusters.
  • The PM and TL on the team that owns the changed code have signed off that the change obeys the above rules.

Add a brief release justification to the body of your PR to justify this backport.

Some other things to consider:

  • What did we do to ensure that a user who doesn’t know or care about this backport has no idea that it happened?
  • Will this work in a cluster of mixed patch versions? Did we test that?
  • If a user upgrades a patch version, uses this feature, and then downgrades, what happens?

@cockroach-teamcity

This change is Reviewable


@michae2 michae2 left a comment


:lgtm:

Reviewable status: :shipit: complete! 1 of 0 LGTMs obtained (waiting on @RaduBerinde)
